
Systematic and fast scientific literature review for policy purposes: An exploration of using ASReview software about the effectiveness of policy instruments promoting sustainable agriculture

In the context of Article 3.1 of the Government Accounts Act 2016 (Comptabiliteitswet 2016, CW 2016), which aims to bring about a stronger scientific underpinning of policy, the House of Representatives examined in 2019 the extent to which policy proposals address the efficiency and effectiveness of the policy instruments to be deployed (Sneller & Snels, 2019). It concluded that while proposals often describe how these instruments should contribute to the objective, they often do not address to what extent. Making such statements about efficiency and effectiveness requires an understanding of the scientific evidence. As a result, the Netherlands Environmental Assessment Agency (PBL) anticipates that it will more frequently receive questions from the government about what science says about the effectiveness of proposed policies. PBL already has a strong scientific orientation in its work and recognizes the great importance of such requests for scientific foundation, but it also foresees that answering this type of question about the scientific evidence for policy proposals can be difficult in practice. This is due, on the one hand, to the short period in which these questions must generally be answered and, on the other, to the time required for a (systematic) literature review. Conducting a (systematic) literature search with the support of artificial intelligence (AI) could offer a solution to this problem, and that is what this report is about.

When using AI, the literature search is supported by a so-called learning algorithm which, as the process progresses, learns to assess better and better which literature is relevant to the researcher. However, little experience has been gained with the use of such software in policy-oriented research. PBL therefore asked the UG to investigate how the open-source AI software ASReview could help to meet the government's demand for scientific insights efficiently. At PBL's request, the UG researched three substantive questions around the effectiveness of policy instruments for sustainable agriculture and used these to test the process of AI-supported literature review. In this research, the UG worked with a combination of AI-supported literature screening and sounding boards of academics who fed the search at the beginning and interpreted the results substantively at the end. Such a sounding board makes it possible to distil substantive lessons in a relatively short time.
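To make concrete what such a learning algorithm does during screening, the sketch below shows a minimal active-learning loop of the kind ASReview implements: a classifier is retrained after every screening decision, and the remaining records are re-ranked by predicted relevance so that the screener always sees the most likely relevant record next. This is an illustrative sketch only, not the ASReview API; the TF-IDF features, naive Bayes classifier, and function names are assumptions chosen for brevity.

```python
# Minimal active-learning screening loop (illustrative only; not the ASReview API).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

def screen(abstracts, seed_labels, ask_expert, n_decisions):
    """abstracts: list of record texts (titles/abstracts).
    seed_labels: {record_index: 0 or 1}; must contain at least one relevant (1)
                 and one irrelevant (0) record to start the learner.
    ask_expert: callable that shows a record to the screener and returns 0 or 1.
    n_decisions: number of records to screen interactively."""
    X = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
    labels = dict(seed_labels)
    for _ in range(n_decisions):
        seen = sorted(labels)
        model = MultinomialNB().fit(X[seen], [labels[i] for i in seen])
        unseen = [i for i in range(len(abstracts)) if i not in labels]
        if not unseen:
            break
        # Re-rank all unscreened records after every decision and present the
        # record with the highest predicted relevance next.
        relevance = model.predict_proba(X[unseen])[:, 1]
        nxt = unseen[int(relevance.argmax())]
        labels[nxt] = ask_expert(nxt)  # the researcher reads and labels it
    return labels
```

This constant re-ranking is also part of what creates the 'trap formation' risk described under finding 3 below: if the early relevant records all concern one subtopic, the model tends to keep surfacing similar records first.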
The findings are as follows:

1. It is possible to conduct a quick and good systematic literature search by combining an AI-supported literature search with a sounding board of scientific experts. The search for scientific insights that could potentially provide substantiation yielded a database of 40,000 potentially relevant papers, from which a diverse set of 100 relevant papers was selected using AI. With the help of the sounding board of experts, an even smaller set of 12 papers was then distilled from this list that were deemed most urgent for policymakers to study.

2. Because the AI software ASReview puts content first and hides the reputation of authors and journals during screening, objectivity and breadth are stimulated.

3. However, ASReview does present other challenges that, if not taken into account, can compromise the objectivity of the research in other ways:
   i) One of the main risks of using ASReview is what we call 'trap formation', especially when there is a short time frame for screening. This means that the researcher ends up on a particular 'track' of articles on a particular (sub)topic, so that other equally relevant articles are not found. The broader the query and the less time available, the greater the risk this poses to the efficiency and reliability of the screening. By using certain settings for the screening, this can be mitigated to a certain extent.
   ii) The efficiency gains that can be achieved with ASReview depend on the breadth and multidisciplinarity of the research question. The topic of sustainable agriculture has many facets, both in terms of instruments and outcomes. This makes it more difficult for the program to learn quickly what is most relevant, so screening takes more time.
   iii) AI is not a fully automated process. Using the program requires skill from the researcher to drive the algorithm, expert knowledge to start the process, and expert knowledge to interpret the results.

4. The content of this research has resulted in two sets of scientific articles: a Top 100 and a Top 12.

5. The first end result of the systematic review is a list (the Top 100) of relevant articles ranked by their "scientific recognition". Because of the requirement for speed in the process, scientific recognition is operationalized simply as a combination of the number of citations of the article per year and the impact factor of the journal in which it was published (as a measure of the rigour of the blind review process). While there are caveats to this method of ranking, it does provide a way to make a large amount of knowledge manageable within a relatively short period of time, and in doing so gives policymakers and researchers a foothold to study "the most important first" (an illustrative sketch of such a scoring follows at the end of this summary).

6. The second end result is a selection from the Top 100 by the scientific experts: which articles are most important for policy? The six multidisciplinary scholars each selected three papers that they felt were most important for policymakers and researchers to engage with; combined, this yielded a set of 12 articles. The Top 12 is a manageable set of articles that was selected quickly and can be studied in depth in a short period of time, while the selection remains largely systematic.

7. In conclusion: what does the process tested here offer compared to what we might call "the standard quick search for scientific evidence" by a PBL staff member? Such a search often consists of manually consulting Google Scholar and/or individually contacting a scientific expert. Compared to manual searches via Scholar, searching the literature with ASReview offers the opportunity to systematically review a vast number of studies for relevance: after each selection by the researcher, the entire database is reordered. Ultimately, it has been shown, this leads to a greater diversity of studies than a manual search process. Compared to contacting experts individually, the added value of using a group of experts in this study lies not only in their diversity of expertise, but also in the fact that they all reflect on the same scientific dataset and choose from it (quickly and rationally) the articles most relevant for policy. This is a much more systematic process than asking for their scientific views separately.
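As a purely illustrative complement to finding 5: the report combines the number of citations per year with the journal impact factor into a single "scientific recognition" score, but this summary does not state how the two are weighted. The sketch below therefore assumes an equally weighted sum of max-normalized values; the field names, example data, and weights are assumptions, not the method actually used in the report.

```python
# Hypothetical operationalization of the "scientific recognition" ranking in
# finding 5. Equal weights and max-normalization are assumed for illustration;
# the report only states that citations per year and journal impact factor
# are combined.
papers = [
    {"title": "Paper A", "citations_per_year": 12.0, "impact_factor": 4.5},
    {"title": "Paper B", "citations_per_year": 3.0,  "impact_factor": 8.1},
    {"title": "Paper C", "citations_per_year": 7.5,  "impact_factor": 2.0},
]
max_cpy = max(p["citations_per_year"] for p in papers)
max_jif = max(p["impact_factor"] for p in papers)

def recognition(paper):
    # Combine both indicators on a common 0-1 scale.
    return (0.5 * paper["citations_per_year"] / max_cpy
            + 0.5 * paper["impact_factor"] / max_jif)

ranking = sorted(papers, key=recognition, reverse=True)
top = ranking[:100]  # the Top 100 described in finding 5
```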